Search results for "Multi-task Learning"
Showing 6 of 6 documents
Automatic Quality Assessment of Cardiac MR Images with Motion Artefacts using Multi-task Learning and K-Space Motion Artefact Augmentation
2022
Patient movement and respiratory motion during MRI acquisition produce image artefacts that reduce image quality and diagnostic value. Quality assessment of the images is essential to minimize segmentation errors and avoid wrong clinical decisions in downstream tasks. In this paper, we propose an automatic multi-task learning (MTL)-based classification model to detect cardiac MR images with different levels of motion artefact. We also develop an automatic segmentation model that leverages k-space-based motion artefact augmentation (MAA) and a novel compound loss that combines Dice loss with a polynomial version of cross-entropy loss (PolyLoss) to robustly segment cardiac st…
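The compound loss described in this abstract can be sketched in a few lines. The combination below is a minimal numpy illustration, not the paper's implementation: it pairs a standard soft Dice loss with the Poly-1 form of cross-entropy (cross-entropy plus an `epsilon * (1 - p_t)` correction term); the mixing weight `alpha` and the coefficient `epsilon` are hypothetical choices, not values from the paper.

```python
import numpy as np

def dice_loss(probs, target, eps=1e-7):
    # Soft Dice loss over per-pixel foreground probabilities and binary labels.
    inter = np.sum(probs * target)
    return 1.0 - (2.0 * inter + eps) / (np.sum(probs) + np.sum(target) + eps)

def poly1_ce_loss(probs, target, epsilon=1.0, eps=1e-7):
    # Poly-1 loss: cross-entropy plus epsilon * (1 - p_t),
    # where p_t is the predicted probability of the true class.
    pt = probs * target + (1.0 - probs) * (1.0 - target)
    ce = -np.log(pt + eps)
    return float(np.mean(ce + epsilon * (1.0 - pt)))

def compound_loss(probs, target, alpha=0.5, epsilon=1.0):
    # Weighted sum of Dice and Poly-1 cross-entropy.
    # alpha is a hypothetical mixing weight, not taken from the paper.
    return alpha * dice_loss(probs, target) + (1.0 - alpha) * poly1_ce_loss(probs, target, epsilon)
```

A perfect prediction drives both terms toward zero, while the Dice term keeps the loss sensitive to overlap even under heavy class imbalance, which is the usual motivation for mixing it with a cross-entropy-style term.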
Emergency Analysis: Multitask Learning with Deep Convolutional Neural Networks for Fire Emergency Scene Parsing
2021
In this paper, we introduce a novel application of scene semantic image segmentation to fire emergency situation analysis. To analyse a fire emergency scene, we propose to use deep convolutional image segmentation networks to identify and classify objects in a scene based on their build material and their vulnerability to catching fire. We introduce our own fire emergency scene segmentation dataset for this purpose. It consists of real-world images with objects annotated on the basis of their build material. We use state-of-the-art segmentation models: DeepLabv3, DeepLabv3+, PSPNet, FCN, SegNet and UNet to compare and evaluate their performance on the fire emergency scene parsing task. …
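Comparing segmentation models as this abstract describes typically comes down to a common metric. The sketch below computes mean intersection-over-union (mIoU), the standard scene-parsing score; the abstract does not state which metric the authors used, so take this as an illustrative assumption.

```python
import numpy as np

def mean_iou(pred, gt, num_classes):
    """Mean intersection-over-union over the classes present in either map.

    pred, gt: integer class-label maps of the same shape.
    """
    ious = []
    for c in range(num_classes):
        p, g = pred == c, gt == c
        union = np.logical_or(p, g).sum()
        if union == 0:
            continue  # class absent from both maps: skip it
        ious.append(np.logical_and(p, g).sum() / union)
    return float(np.mean(ious))
```

Skipping classes absent from both maps keeps the score from being inflated by trivially correct "empty" classes, which matters when material categories appear in only a few scenes.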
Learning from good examples
1995
The usual information in inductive inference for the purposes of learning an unknown recursive function f is the set of all input/output examples (n, f(n)), n ∈ ℕ. In contrast to this approach we show that it is considerably more powerful to work with finite sets of “good” examples, even when these good examples are required to be effectively computable. The influence of the underlying numberings, with respect to which the learning problem has to be solved, on the capabilities of inference from good examples is also investigated. It turns out that nonstandard numberings can be much more powerful than Gödel numberings.
Class Noise and Supervised Learning in Medical Domains: The Effect of Feature Extraction
2006
Inductive learning systems have been successfully applied in a number of medical domains. It is generally accepted that the highest accuracy results that an inductive learning system can achieve depend on the quality of data and on the appropriate selection of a learning algorithm for the data. In this paper we analyze the effect of class noise on supervised learning in medical domains. We review the related work on learning from noisy data and propose to use feature extraction as a pre-processing step to diminish the effect of class noise on the learning process. Our experiments with 8 medical datasets show that feature extraction indeed helps to deal with class noise. It clearly results i…
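A minimal sketch of the kind of pre-processing this abstract proposes: projecting the data onto its top principal components before training, so that low-variance directions — which often carry noise rather than signal — are discarded. This is generic PCA-style feature extraction, assumed for illustration; the abstract does not specify which extraction method the authors evaluated.

```python
import numpy as np

def pca_features(X, k):
    """Project samples onto the top-k principal components.

    X: (n_samples, n_features) data matrix.
    Feature extraction like this can suppress noisy low-variance
    directions before a classifier is trained on the reduced data.
    """
    Xc = X - X.mean(axis=0)
    # SVD of the centred data gives the principal directions in vt
    _, _, vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ vt[:k].T
```

The reduced representation is then fed to the inductive learner in place of the raw features; the choice of `k` trades off noise suppression against information loss.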
A Bayesian-optimal principle for learner-friendly adaptation in learning games
2010
Adaptive learning games should provide opportunities for the student to learn as well as motivate playing until goals have been reached. In this paper, we give a mathematically rigorous treatment of the problem in the framework of Bayesian decision theory. To quantify the opportunities for learning, we assume that the learning tasks that yield the most information about the current skills of the student, while being desirable for measurement in their own right, would also be among those that are efficient for learning. Indeed, optimization of the expected information gain appears to naturally avoid tasks that are exceedingly demanding or exceedingly easy as their results are predic…
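The expected information gain the abstract optimizes can be made concrete with a toy discrete model. The sketch below assumes a finite set of skill levels with a prior over them and a known success probability per level; the gain is the expected drop in entropy about the skill after observing one task outcome. This is a generic Bayesian-design calculation for illustration, not the paper's model.

```python
import numpy as np

def entropy(p):
    # Shannon entropy in bits, ignoring zero-probability outcomes.
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def expected_info_gain(prior, p_success):
    """Expected entropy reduction about the student's skill from one task.

    prior     : probabilities over discrete skill levels
    p_success : P(task solved | skill level), one entry per level
    """
    gain = entropy(prior)
    for outcome_prob in (p_success, 1.0 - p_success):  # success / failure
        marginal = float((prior * outcome_prob).sum())
        if marginal == 0:
            continue
        posterior = prior * outcome_prob / marginal
        gain -= marginal * entropy(posterior)
    return gain
```

A task that everyone (or no one) solves yields zero gain, while a task whose success probability splits the skill levels is maximally informative — which is exactly why this criterion steers away from tasks that are exceedingly demanding or exceedingly easy.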
On the duality between mechanistic learners and what it is they learn
1993
All previous work in inductive inference and theoretical machine learning has taken the perspective of looking for a learning algorithm that successfully learns a collection of functions. In this work, we consider the perspective of starting with a set of functions, and considering the collection of learning algorithms that are successful at learning the given functions. Some strong dualities are revealed.